G-RSM benchmark test
1. MPI Wall Clock Time comparison on IBM-SP Datastar (SDSC)
GSM (T62L28, 72-hr), 192x94 = 18048 grid cells

            64 CPUs     128 CPUs    256 CPUs
  psplit    22.0 sec    19.0 sec    23.4 sec
GSM (T248L28, 72-hr)

            64 CPUs     128 CPUs    256 CPUs
  psplit    177.8 sec   -           -
RSM (128x199, 10 km, 12-hour), 128x199 = 25472 grid cells

            128 CPUs    256 CPUs
  psplit    93.1 sec    72.8 sec
RSM (512x335, 10 km, 6-hour), 512x335 = 171520 grid cells

            512 CPUs    1024 CPUs
  psplit    181.9 sec   338.5 sec
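The scaling behavior in the tables above can be summarized as parallel efficiency (actual speedup divided by ideal speedup). A quick sketch, using the Datastar wall-clock times copied from the tables:

```python
def parallel_efficiency(base_cpus, base_time, cpus, time):
    """Efficiency = (base_time / time) / (cpus / base_cpus)."""
    return (base_time / time) / (cpus / base_cpus)

# GSM T62L28, 64 -> 128 CPUs: modest speedup
gsm_128 = parallel_efficiency(64, 22.0, 128, 19.0)
# GSM T62L28, 64 -> 256 CPUs: actually slower than on 64 CPUs
gsm_256 = parallel_efficiency(64, 22.0, 256, 23.4)
# RSM 128x199, 128 -> 256 CPUs
rsm_256 = parallel_efficiency(128, 93.1, 256, 72.8)
# RSM 512x335, 512 -> 1024 CPUs: doubling CPUs slows the run down
rsm_1024 = parallel_efficiency(512, 181.9, 1024, 338.5)

for name, eff in [("GSM 64->128", gsm_128), ("GSM 64->256", gsm_256),
                  ("RSM 128->256", rsm_256), ("RSM 512->1024", rsm_1024)]:
    print(f"{name}: {eff:.2f}")
```

At these problem sizes the per-CPU workload is already small, so communication overhead dominates and efficiency falls well below 1 as CPUs are added.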
Ratio of the number of computational operations between RSM and GSM (grid-cell ratio times time-step ratio):
25472/18048 x 1800/30 = 1.411 x 60 ≈ 85
Ratio of computation time from the tables above (the RSM 12-hour time scaled to 72 hours, divided by the GSM 72-hour time):
91.4 x 6 / 19 ≈ 29
This suggests RSM is roughly 3 times more efficient than GSM (85/29 ≈ 2.9), though the reason is not obvious.
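The arithmetic behind these ratios can be spelled out as follows (a sketch; the 1800 s and 30 s time steps and the 91.4 s timing are taken directly from the calculation above, though note the table itself lists 93.1 s for the RSM 12-hour run):

```python
# Operation-count ratio: (RSM cells / GSM cells) * (GSM time step / RSM time step)
rsm_cells = 128 * 199          # 25472
gsm_cells = 192 * 94           # 18048
ops_ratio = (rsm_cells / gsm_cells) * (1800 / 30)
print(f"operation ratio: {ops_ratio:.0f}")

# Wall-clock ratio: RSM 12-hr time scaled to 72 hr, over the GSM 72-hr time
time_ratio = 91.4 * 6 / 19.0
print(f"time ratio: {time_ratio:.0f}")

# Implied efficiency advantage of RSM over GSM
print(f"RSM efficiency factor: {ops_ratio / time_ratio:.1f}")
```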
2. MPI Wall Clock Time comparison on COMPAS Linux cluster (SIO)
RSM (128x199, 10 km, 12-hour), 128x199 = 25472 grid cells

               12-hr    24-hr    36-hr    48-hr
  psplit_54    473.8    475.1    458.6    468.5

(times in seconds for each successive 12-hour segment; on a per-CPU basis, roughly 2 times slower than the IBM-SP)
3. MPI Wall Clock Time comparison on IBM-SP Bluesky (NCAR)
RSM (160x199, 10 km)
pspl_128: 216 sec per 12 hours
Compare to 116.25 sec on Datastar (SDSC).
See the following for more comparisons:
4. GSM MPI Wall Clock Time Comparison on COMPAS Linux cluster (SIO) - older processors.
5. RSM on IBM Power 4 graphs.
6. RSM on IBM machines at NCAR.